Excess Risk Bounds for Exponentially Concave Losses
Authors
Abstract
The overarching goal of this paper is to derive excess risk bounds for learning from exp-concave loss functions in passive and sequential learning settings. Exp-concave loss functions encompass several fundamental problems in machine learning, such as squared loss in linear regression, logistic loss in classification, and negative logarithm loss in portfolio management. In the batch setting, we obtain sharp bounds on the performance of empirical risk minimization performed in a linear hypothesis space with respect to exp-concave loss functions. We also extend the results to the online setting, where the learner receives the training examples in a sequential manner. We propose an online learning algorithm, a suitably modified version of the online Newton method, that obtains sharp risk bounds. Under an additional mild assumption on the loss function, we show that in both settings we are able to achieve an excess risk bound of O(d log n / n) that holds with high probability.
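As a reminder of the central definition (this is the standard formulation from the online learning literature, paraphrased here rather than quoted from the paper), a loss is exp-concave when exponentiating its negative yields a concave function:

```latex
% Exp-concavity (standard definition; the notation here is illustrative,
% not the paper's own). A function \ell : \mathcal{W} \to \mathbb{R}
% is \alpha-exp-concave for some \alpha > 0 if
\[
  w \;\mapsto\; \exp\bigl(-\alpha\,\ell(w)\bigr)
  \quad \text{is concave on } \mathcal{W}.
\]
% Example: the squared loss \ell(w) = (\langle w, x \rangle - y)^2 with
% bounded residual is exp-concave, as is the negative logarithm loss
% \ell(w) = -\log \langle w, x \rangle used in portfolio management.
```

The paper's sequential algorithm is described as a modified online Newton method. For orientation only, the sketch below shows the standard Online Newton Step update (Hazan, Agarwal, and Kale, 2007), not the authors' modified variant; the interface `loss_grad`, the step parameter `gamma`, and the regularizer `eps` are illustrative assumptions, and the projection onto the feasible set is omitted.

```python
import numpy as np

def online_newton_step(loss_grad, x0, T, gamma=0.5, eps=1.0):
    """Sketch of the standard Online Newton Step update.

    loss_grad(t, x) is assumed to return the gradient of the t-th
    exp-concave loss at the iterate x (hypothetical interface).
    Projection onto the feasible set is omitted for brevity.
    """
    d = x0.shape[0]
    A = eps * np.eye(d)                      # regularized curvature matrix
    x = x0.astype(float)
    for t in range(T):
        g = loss_grad(t, x)                  # gradient of the current loss
        A += np.outer(g, g)                  # rank-one curvature update
        x -= np.linalg.solve(A, g) / gamma   # Newton-style step
    return x
```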
Similar Resources
Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization
In this paper we derive high probability lower and upper bounds on the excess risk of stochastic optimization of exponentially concave loss functions. Exponentially concave loss functions encompass several fundamental problems in machine learning such as squared loss in linear regression, logistic loss in classification, and negative logarithm loss in portfolio management. We demonstrate an O(d...
Average Stability is Invariant to Data Preconditioning. Implications to Exp-concave Empirical Risk Minimization
We show that the average stability notion introduced by [12, 4] is invariant to data preconditioning, for a wide class of generalized linear models that includes most of the known exp-concave losses. In other words, when analyzing the stability rate of a given algorithm, we may assume the optimal preconditioning of the data. This implies that, at least from a statistical perspective, explicit r...
Fast rates with high probability in exp-concave statistical learning
We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d log(1/δ)/n) with probability 1−δ. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-c...
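As background (a standard formulation from the fast-rates literature, paraphrased here rather than taken from this snippet), the Bernstein condition referred to above can be sketched as a variance-to-mean inequality for excess losses; the constant B and the notation are illustrative:

```latex
% Bernstein condition (standard exponent-one form; notation is illustrative).
% For a loss class with risk minimizer f^*, there exists B > 0 such that
\[
  \mathbb{E}\bigl[(\ell_f - \ell_{f^*})^2\bigr]
  \;\le\; B\,\mathbb{E}\bigl[\ell_f - \ell_{f^*}\bigr]
  \qquad \text{for all } f \text{ in the class.}
\]
% Exp-concavity is known to imply this exponent-one case, which underlies
% the fast O(d \log(1/\delta)/n) rate mentioned in the snippet above.
```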
Optimal oracle inequalities for model selection
Model selection is often performed by empirical risk minimization. The quality of selection in a given situation can be assessed by risk bounds, which require assumptions both on the margin and the tails of the losses used. Starting with examples from the three basic estimation problems (regression, classification, and density estimation), we formulate risk bounds for empirical risk minimiz...
Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
High-dimensional data analysis has motivated a spectrum of regularization methods for variable selection and sparse modeling, with two popular methods being convex and concave ones. A long debate has taken place on whether one class dominates the other, an important question both in theory and to practitioners. In this article, we characterize the asymptotic equivalence of regularization method...
Journal: CoRR
Volume: abs/1401.4566
Issue: -
Pages: -
Publication date: 2014